    Bootstrapping Probabilistic Models of Qualitative Spatial Relations for Active Visual Object Search

    In many real-world applications, autonomous mobile robots are required to observe or retrieve objects in their environment, despite not having accurate estimates of the objects' locations. Finding objects in real-world settings is a non-trivial task, given the complexity and the dynamics of human environments. However, by understanding and exploiting the structure of such environments, e.g. where objects are commonly placed as part of everyday activities, robots can perform search tasks more efficiently and effectively than without such knowledge. In this paper we investigate how probabilistic models of qualitative spatial relations can improve performance in object search tasks. Specifically, we learn Gaussian Mixture Models of spatial relations between object classes from descriptive statistics of real office environments. Experimental results with a range of sensor models suggest that our model improves overall performance in object search tasks.
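    The following is a minimal, hypothetical sketch (not the authors' code) of the core idea: fit a Gaussian Mixture Model to relative object positions and use its likelihood to rank candidate search locations. The object classes, offsets, and scikit-learn usage are all assumptions for illustration.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical training data: 2-D offsets of "mug" relative to
    # "monitor", as might be extracted from statistics of office scenes.
    rng = np.random.default_rng(0)
    offsets = np.vstack([
        rng.normal(loc=[0.3, 0.0], scale=0.05, size=(100, 2)),   # right of monitor
        rng.normal(loc=[-0.3, 0.1], scale=0.08, size=(100, 2)),  # left of monitor
    ])

    gmm = GaussianMixture(n_components=2, covariance_type="full").fit(offsets)

    # Rank candidate search locations: a higher log-likelihood means the
    # mug is more plausibly found there, so the robot looks there first.
    candidates = np.array([[0.28, 0.02], [1.5, 1.5]])
    print(gmm.score_samples(candidates))  # first candidate scores higher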

    Perceptual abstraction and attention

    This is a report on the preliminary achievements of WP4 of the IM-CleVeR project on abstraction for cumulative learning, directed in particular at: (1) producing algorithms to develop abstraction features under top-down action influence; (2) algorithms for supporting detection of change in motion pictures; (3) developing attention and vergence control on the basis of locally computed rewards; (4) searching for abstract representations suitable for the LCAS framework; (5) developing predictors based on information theory to support novelty detection. The report is organized around these five tasks, which form part of WP4. We provide a concise description of the work done for each task by the partners.
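    As one possible reading of task (5), the sketch below scores observations by their surprisal, -log p(x), under a density model fitted to familiar data; high surprisal signals novelty. The data and the use of a kernel density estimate are assumptions, not the project's actual method.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    familiar = rng.normal(0.0, 1.0, size=(500, 2))  # familiar sensory samples
    kde = KernelDensity(bandwidth=0.5).fit(familiar)

    def surprisal(x):
        # score_samples returns log p(x); negate to get surprisal in nats.
        return -kde.score_samples(np.atleast_2d(x))

    print(surprisal([0.1, -0.2]))  # low surprisal: familiar
    print(surprisal([8.0, 8.0]))   # high surprisal: novel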

    An Approach for Efficient Planning of Robotic Manipulation Tasks

    Robot manipulation is a challenging task for planning as it involves a mixture of symbolic planning and geometric planning. We would like to express goals and many action effects symbolically, for example specifying a goal such as "for all x, if x is a cup, then x should be on the tray", but to accomplish this we may need to plan the geometry of fitting all the cups on the tray and how to grasp, move and release the cups to achieve that geometry. In the ideal case, this could be accomplished by a fully hybrid planner that alternates between geometric and symbolic reasoning to generate a solution. However, in practice this is very complex, and the full power of this approach may only be required for a small subset of problems. Instead, we plan completely symbolically, and then attempt to generate a geometric plan by translating the symbolic predicates into geometric relationships. We then execute this plan in simulation, and if it fails, we backtrack, first in geometric space and then, if necessary, in symbolic space. We show that this approach, while not complete, solves a number of challenging manipulation problems, and we demonstrate it running on a robotic platform.
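    A hedged sketch of this plan-then-ground loop (all callables are hypothetical placeholders, not the paper's interface): plan symbolically, try alternative geometric groundings for each action in simulation, and fall back to a different symbolic plan when the geometry cannot be realised.

    def plan_and_ground(state, goal, symbolic_plans, groundings, simulate):
        # symbolic_plans yields candidate symbolic plans (symbolic backtracking);
        # groundings yields grasps/placements for an action (geometric backtracking);
        # simulate returns the successor state on success, or None on failure.
        for plan in symbolic_plans(state, goal):
            trace, current, ok = [], state, True
            for action in plan:
                for geom in groundings(action, current):
                    nxt = simulate(current, action, geom)
                    if nxt is not None:           # geometry works in simulation
                        trace.append((action, geom))
                        current = nxt
                        break
                else:                             # no grounding worked:
                    ok = False                    # abandon this symbolic plan
                    break
            if ok:
                return trace                      # executable, grounded plan
        return None                               # no plan could be grounded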

    Learning Operators for Manipulation Planning

    We describe a method for learning planning operators for manipulation tasks from hand-written programs, providing a high-level command interface to a robot manipulator that allows tasks to be specified simply as goals. This is made challenging by the fact that a manipulator is a hybrid system: any model of it consists of discrete variables such as "holding cup" and continuous variables such as the poses of objects and the position of the robot. The approach relies on three novel techniques. First, action learning from annotated code uses simulation to find PDDL action models corresponding to code fragments. Second, to provide the geometric information needed, we use supervised learning to produce a mapping from geometric to symbolic state; the mapping can also be used in reverse to produce a geometric state that makes a set of predicates true, thus allowing desired object positions to be generated during planning. Finally, during execution of the plan, we use a planner based on partially observable Markov decision processes (POMDPs) to repair the initial plan when unforeseen geometric constraints prevent actions from being executed.
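    A minimal sketch of the second technique, the learned geometric-to-symbolic mapping, assuming a simple classifier over relative-pose features (the features, labels, and scikit-learn model are illustrative, not the paper's):

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical labelled examples: features are (dx, dy, dz) between two
    # object frames; label 1 means the predicate on(a, b) holds.
    X = np.array([[0.00, 0.00, 0.05], [0.01, -0.01, 0.04],   # on(a, b)
                  [0.40, 0.20, 0.00], [0.00, 0.00, 0.50]])   # not on(a, b)
    y = np.array([1, 1, 0, 0])

    clf = SVC(kernel="rbf").fit(X, y)

    # Forward direction: geometric state -> symbolic predicate.
    print(clf.predict([[0.005, 0.0, 0.045]]))  # expect [1]: on(a, b) holds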

    Manipulation Planning using Learned Symbolic State Abstractions

    We present an approach for planning robotic manipulation tasks that uses a learned mapping between geometric states and logical predicates. Because manipulation planning requires both task-level and geometric reasoning, such a mapping is needed to convert between the two. Consider a robot tasked with putting several cups on a tray. The robot needs to find positions for all the objects, and may need to nest one cup inside another to get them all on the tray. This requires translating back and forth between symbolic states that the planner uses, such as "stacked(cup1,cup2)", and geometric states representing the positions and poses of the objects. We learn the mapping from labelled examples and, importantly, learn a representation that can be used in both the forward (geometric to symbolic) and reverse directions. This enables us to build symbolic representations of scenes the robot observes, but also to translate a desired symbolic state from a plan into a geometric state that the robot can achieve through manipulation. We also show how such a mapping can be used for efficient manipulation planning: the planner first plans symbolically, then applies the mapping to generate geometric positions that are sent to a path planner.
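    A hedged illustration of the reverse direction (the names and the rejection-sampling strategy are assumptions, not the paper's method): given a desired predicate, sample relative poses until the learned forward mapping agrees the predicate holds, yielding a geometric target for the path planner.

    import numpy as np

    def reverse_map(predicate_holds, rng, max_tries=10000):
        # Rejection-sample a relative pose (dx, dy, dz) that the forward
        # mapping classifies as satisfying the desired predicate.
        for _ in range(max_tries):
            pose = rng.uniform(low=-0.5, high=0.5, size=3)
            if predicate_holds(pose):
                return pose          # geometric state realising the predicate
        return None                  # give up; caller must replan symbolically

    # Stand-in forward mapping for stacked(cup1,cup2): cup1 roughly
    # centred above cup2.
    holds = lambda p: abs(p[0]) < 0.05 and abs(p[1]) < 0.05 and 0 < p[2] < 0.15
    print(reverse_map(holds, np.random.default_rng(1)))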